Implement IngesterAffinity broadcast #6152
nadav-govari merged 8 commits into nadav/feature-node-based-routing
Conversation
We already use the word affinity for searcher split affinity. I think we can find another name for this metric that isn't already in use.
Yep, how's ingester capacity? As in, literally the capacity of the ingester to ingest new requests.
Renamed the task to BroadcastIngesterCapacity and all references from affinity to capacity.
This could use a comment. I assume you had a duration in mind for that window and then divided by BROADCAST_INTERVAL_PERIOD to get to 6. What's that window duration?
Adding. It was meant to be 30 seconds.
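For reference, a hedged sketch of how that window could be documented as a constant. BROADCAST_INTERVAL_PERIOD exists in the broadcast module; the 5-second value shown here is only inferred from 30s / 6, and the CAPACITY_WINDOW_LEN name is made up for illustration.

use std::time::Duration;

// Illustrative value only: the real BROADCAST_INTERVAL_PERIOD is defined in the broadcast module.
const BROADCAST_INTERVAL_PERIOD: Duration = Duration::from_secs(5);

/// Number of measurements kept in the sliding window, sized to cover roughly
/// 30 seconds of history: 30s / BROADCAST_INTERVAL_PERIOD = 6.
const CAPACITY_WINDOW_LEN: usize = 30 / (BROADCAST_INTERVAL_PERIOD.as_secs() as usize);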
There's a better implementation of a time series based on a rotating time window in broadcast. This is a common pattern, so move the original implementation into common, abstract it enough so it can be used for both use cases, then import and use it here.
LocalShardUpdate and BroadcastIngesterCapacity now both use this new RingBuffer, which is in quickwit-common.
Mem or disk? The name should say it.
Disk, modified.
Use expect and state the invariant/condition that allows you to call expect safely:
.expect("window should not be empty")
.expect("window should have more than one measurement")
Noted, though this isn't relevant any longer with the RingBuffer changes.
Just lock the whole thing fully and make the code more readable.
The WAL can take multiple BROADCAST_INTERVAL_PERIOD intervals to load. The task should not stop while we're loading the WAL, only if the state is dropped.
Updated to the following cases (see the sketch after the list):
- State dropped: error, stop task
- Ingester not initialized: no-op
- Ingester ready: happy path
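A rough sketch of how the task loop could distinguish the three cases above; the IngesterStatus enum, the Weak state handle, and the function name are illustrative assumptions, not the exact types from the PR.

use std::sync::{RwLock, Weak};

#[derive(Clone, Copy, PartialEq)]
enum IngesterStatus {
    Initializing, // e.g. still replaying the WAL
    Ready,
}

struct IngesterState {
    status: IngesterStatus,
}

/// Returns `false` when the task should stop, `true` when it should keep ticking.
fn broadcast_capacity_tick(weak_state: &Weak<RwLock<IngesterState>>) -> bool {
    let Some(state) = weak_state.upgrade() else {
        // State dropped: the ingester is shutting down, so log an error and stop the task.
        return false;
    };
    let state_guard = state.read().expect("lock should not be poisoned");
    if state_guard.status != IngesterStatus::Ready {
        // Ingester not initialized yet: no-op, try again on the next tick.
        return true;
    }
    // Ingester ready: happy path, compute and broadcast the capacity score here.
    true
}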
You can't broadcast that over a single key because the list of open shard counts can be very long.
-> one key per index/source
(The value length is an issue because chitchat uses UDP and every update must fit in a single datagram (MTU))
Made it similar to LocalShardsUpdate, one key per index/source.
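As a hedged illustration of what one key per index/source could look like (the exact key scheme and UID formatting used by the PR may differ):

/// Hypothetical helper: builds one gossip key per (index, source) pair so that each
/// value stays small enough to fit in a single chitchat UDP datagram.
fn capacity_score_key(index_uid: &str, source_id: &str) -> String {
    // "ingester.capacity_score:" matches the INGESTER_CAPACITY_SCORE_PREFIX constant
    // discussed below; the `index_uid:source_id` suffix is an assumption.
    format!("ingester.capacity_score:{index_uid}:{source_id}")
}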
Suggested change:
-    .filter(|shard| shard.is_open())
+    .filter(|shard| shard.is_advertisable && !shard.is_replica() && shard.is_open())
Suggested change:
-    pub fn oldest(&self) -> Option<T> {
+    pub fn front(&self) -> Option<T> {
Suggested change:
-    pub fn push(&mut self, value: T) {
+    pub fn push_back(&mut self, value: T) {
Let's just copy (half of) the VecDeque API.
/// Elements are stored in a flat array of size `N` and rotated on each push.
/// The newest element is always at position `N - 1` (the last slot), and the
/// oldest is at position `N - len`.
pub struct RingBuffer<T: Copy + Default, const N: usize> {
I thought we discussed using memory?
Yeah, now that I realize they're capped in the chart, I think they're functionally the same, but memory feels like a cleaner number to read. So I switched it to memory.
/// Elements are stored in a flat array of size `N` and rotated on each push.
/// The newest element is always at position `N - 1` (the last slot), and the
/// oldest is at position `N - len`.
pub struct RingBuffer<T: Copy + Default, const N: usize> {
Claude can easily make push O(1), right?
Yes it can :)
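For reference, a minimal sketch of a fixed-capacity ring buffer with VecDeque-style naming and an O(1) push_back driven by a head index. It departs from the rotate-on-push layout quoted above, and the field and method names are assumptions rather than what necessarily landed in quickwit-common.

pub struct RingBuffer<T: Copy + Default, const N: usize> {
    // Backing storage; only the first `len` logical elements are meaningful.
    buffer: [T; N],
    // Index of the oldest element.
    head: usize,
    // Number of elements currently stored (<= N).
    len: usize,
}

impl<T: Copy + Default, const N: usize> RingBuffer<T, N> {
    pub fn new() -> Self {
        Self { buffer: [T::default(); N], head: 0, len: 0 }
    }

    /// Appends an element, overwriting the oldest one when the buffer is full. O(1).
    pub fn push_back(&mut self, value: T) {
        let tail = (self.head + self.len) % N;
        self.buffer[tail] = value;
        if self.len < N {
            self.len += 1;
        } else {
            // The buffer was full: the slot we just wrote replaced the oldest element,
            // so advance the head.
            self.head = (self.head + 1) % N;
        }
    }

    /// Returns the oldest element, if any.
    pub fn front(&self) -> Option<T> {
        (self.len > 0).then(|| self.buffer[self.head])
    }

    /// Returns the newest element, if any.
    pub fn back(&self) -> Option<T> {
        (self.len > 0).then(|| self.buffer[(self.head + self.len - 1) % N])
    }

    pub fn len(&self) -> usize {
        self.len
    }

    pub fn is_empty(&self) -> bool {
        self.len == 0
    }
}

Because push_back overwrites a single slot instead of shifting all N elements, the push stays O(1) while the logical order can still be recovered from `head` and `len`.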
Suggested change:
-pub const INGESTER_CAPACITY_PREFIX: &str = "ingester.capacity:";
+pub const INGESTER_CAPACITY_SCORE_PREFIX: &str = "ingester.capacity_score:";
Let's use capacity_score everywhere.
/// Takes a snapshot of the primary shards hosted by the ingester at regular intervals and
/// broadcasts it to other nodes via Chitchat.
-pub(super) struct BroadcastLocalShardsTask {
+pub struct BroadcastLocalShardsTask {
Suggested change:
-pub struct BroadcastLocalShardsTask {
+pub(crate) struct BroadcastLocalShardsTask {
Merged commit 76cfc84 into nadav/feature-node-based-routing.
Background
Main idea: https://docs.google.com/document/d/1XUpBdMFnuX8d23erK-XwQkomRgbeRTJ0TJtve7RGW3k/edit?tab=t.0.
All work on this feature will be merged PR by PR into the base branch nadav/feature-node-based-routing, which will then eventually be merged into main once it's fully ready.
PR Description
Creates a new broadcast to prepare for node-based routing. The idea is described in more depth in the design document linked above.
The primary thinking here is:
Ingesters will move away from keeping shard-level data and instead keep this node-level data for routing requests. Routers will move to node-based routing tables and use the data from these broadcasts to keep them up to date.